When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
Chris Russell, Matt J. Kusner, Joshua Loftus, Ricardo Silva
Machine learning is now being used to make crucial decisions about people's lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal "world" is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
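The abstract's framing, learning a predictor whose counterfactual-fairness violation is penalised simultaneously across several candidate causal models and optimised with gradient-based methods, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear-logistic predictor, the absolute-gap penalty, and the stand-in counterfactual matrices `X_cf_worlds` are all assumptions made for the example.

```python
# Sketch: approximate counterfactual fairness across multiple causal "worlds"
# as a penalised objective, assuming counterfactual feature matrices have
# already been generated under each candidate causal model.
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(theta, X, y, X_cf_worlds, lam):
    """Cross-entropy loss plus, for every causal world, a penalty on the
    average gap between factual and counterfactual predictions."""
    p = sigmoid(X @ theta)
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    unfairness = 0.0
    for X_cf in X_cf_worlds:                     # one matrix per causal model
        p_cf = sigmoid(X_cf @ theta)
        unfairness += np.mean(np.abs(p - p_cf))  # approximate fairness gap
    return nll + lam * unfairness

# Toy data: 200 individuals, 3 features; the "counterfactuals" under two
# hypothetical worlds are random stand-ins for model-based counterfactuals.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)
X_cf_worlds = [X + rng.normal(scale=0.1, size=X.shape) for _ in range(2)]

res = minimize(objective, x0=np.zeros(3), args=(X, y, X_cf_worlds, 5.0),
               method="L-BFGS-B")
print("fitted weights:", res.x)
```

A hard-constraint formulation (bounding each world's violation below a threshold) could be handled similarly, for instance by increasing the penalty weight until every world's gap falls under the desired bound.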
Reviews: When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
This paper tackles the primary criticism aimed at applications of causal graphical models to fairness: one needs to fully believe an assumed causal model for the results to be valid. Instead, it presents a definition of fairness under which many plausible causal models can be assumed, and fairness violations are required to be bounded below a threshold for all such models. The authors present a simple way to formally express this idea: by defining an approximate notion of counterfactual fairness and using the amount of fairness violation as a regularizer for a supervised learner. This is an important theoretical advance, and I think it can lead to promising work. The key part, then, is to develop a method for constructing counterfactual estimates. This is a hard problem because, even for a single causal model, there may be unknown and unobserved confounders that affect relationships between observed variables.
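The counterfactual estimates the review refers to can, under a fully specified causal model with no hidden confounding, be generated by the standard abduction-action-prediction recipe. The sketch below assumes a toy linear structural equation for a single feature influenced by a protected attribute; the equation, coefficient, and variable names are illustrative assumptions, not the specific models used in the paper.

```python
# Sketch: counterfactual features under one assumed causal "world",
# a linear model A -> X with additive noise, via abduction-action-prediction.
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.integers(0, 2, size=n)     # protected attribute (e.g. group 0 or 1)
beta = 1.5                         # assumed causal effect of A on X
U = rng.normal(size=n)             # latent background noise
X_obs = beta * A + U               # structural equation: X := beta*A + U

# Abduction: recover each individual's noise term from the observed data.
U_hat = X_obs - beta * A
# Action + prediction: flip A and regenerate X under the same noise.
A_cf = 1 - A
X_cf = beta * A_cf + U_hat         # counterfactual feature in this world
```

When unobserved confounders are allowed, the abduction step is no longer point-identified, which is exactly the difficulty the review highlights.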